
[pull] master from buildroot:master#921

Merged
pull[bot] merged 4 commits into mir-one:master from buildroot:master
Mar 19, 2026
Conversation


@pull pull bot commented Mar 19, 2026

See Commits and Changes for more details.


Created by pull[bot] (v2.0.0-alpha.4)


cp0613 and others added 4 commits March 19, 2026 21:45
A "device memory" enabling project encompassing tools and
libraries for CXL, NVDIMMs, DAX, memory tiering and other
platform memory device topics.

ndctl is using __struct_group() [1] which was introduced in
kernel headers in upstream commit [2], first included in v5.16.
The commit [2] was backported in v5.15.54 in [3] and v5.10.156
in [4]. Therefore, this commit sets the minimum toolchain headers
version requirement to 5.10.

[1] https://github.com/pmem/ndctl/blob/v83/cxl/fwctl/features.h#L108
[2] https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/commit/?id=50d7bd38c3aafc4749e05e8d7fcb616979143602
[3] https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/commit/?id=d57ab893cdf8046cbe4d49746f9418020f788b1f
[4] https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/commit/?id=9fd7bdaffe0e89833f4b1c1d3abd43023e951ec1

Signed-off-by: Chen Pei <cp0613@linux.alibaba.com>
[Julien:
  - add commit log info about __struct_group()
  - add __struct_group() comment in Config.in
  - relax toolchain headers requirements to 5.10
  - sort BR2_PACKAGE_ blocks in .mk alphabetically
]
Signed-off-by: Julien Olivain <ju.o@free.fr>
Signed-off-by: Chen Pei <cp0613@linux.alibaba.com>
Signed-off-by: Julien Olivain <ju.o@free.fr>
Since llama.cpp update in Buildroot commit [1], the test_aichat can
fail for several reasons:

The loop checking for llama-server availability can fail if curl
succeeds but the returned JSON data is not formatted as expected.
This can happen if the server is ready but the model is not completely
loaded. In that case, the server returns:

    {"error":{"message":"Loading model","type":"unavailable_error","code":503}}

This commit ignores Python KeyError exceptions during the
server test, to avoid failing when this message is received.

Also, this new llama-server version introduced prompt caching, which
uses too much memory. This commit disables prompt caching entirely by
adding "--cache-ram 0" to the llama-server options.

[1] https://gitlab.com/buildroot.org/buildroot/-/commit/05c36d5d875713521f99b7bad48be316dcde2510

Signed-off-by: Julien Olivain <ju.o@free.fr>
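The readiness loop described above can be sketched in Python as follows. This is a minimal illustration, not the actual test_aichat code: `fetch` is a hypothetical callable standing in for the curl request, and the response key path (`data[0].id`) is an assumed shape for the model listing.

```python
import json
import time


def server_ready(fetch, retries=5, delay=0.1):
    """Poll llama-server until its reply parses as expected.

    While the model is still loading, the server answers with an
    error document such as:
        {"error":{"message":"Loading model","type":"unavailable_error","code":503}}
    Accessing the expected keys then raises KeyError, which we
    ignore and retry instead of failing the whole test.
    """
    for _ in range(retries):
        try:
            data = json.loads(fetch())
            return data["data"][0]["id"]  # model id once loading is done
        except (KeyError, json.JSONDecodeError):
            time.sleep(delay)  # server up, model not ready yet: retry
    return None
```

Catching only KeyError (and malformed JSON) keeps the loop strict: any other failure mode, such as curl itself erroring out, still surfaces immediately.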
Update harfbuzz to version 13.2.1. Release notes:
https://github.com/harfbuzz/harfbuzz/blob/13.2.1/NEWS

Signed-off-by: Giulio Benetti <giulio.benetti@benettiengineering.com>
Signed-off-by: Julien Olivain <ju.o@free.fr>
@pull pull bot locked and limited conversation to collaborators Mar 19, 2026
@pull pull bot added the ⤵️ pull label Mar 19, 2026
@pull pull bot merged commit d5ff979 into mir-one:master Mar 19, 2026
